
    Content Delivery and Sharing in Federated Cloud Storage

    Cloud-based storage is becoming a cost-effective solution for agencies, hospitals, government institutions and scientific centers to deliver and share contents to/with a set of end-users. However, reliability, privacy and lack of control are the main problems that arise when contracting content delivery services with a single cloud storage provider. This paper presents the implementation of a storage system for content delivery and sharing in federated cloud storage networks. This system virtualizes the storage resources of a set of organizations as a single federated system, which is in charge of the content storage. The architecture includes a metadata management layer that keeps content delivery control in-house, and a storage synchronization worker/monitor that tracks the state of storage resources in the federation and places contents near the end-users. It also includes a redundancy layer based on a multi-threaded engine that enables the system to withstand failures in the federated network. We developed a prototype based on this scheme as a proof of concept. The experimental evaluation shows the benefits of building content delivery systems in federated cloud environments in terms of performance, reliability and profitability of the storage space. The work presented in this paper has been partially supported by the EU under the COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
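
    The sketch below illustrates, under loose assumptions, the two architectural ideas highlighted above: an in-house metadata catalog and redundant placement of content across the storage sites of a federation. The names (Site, FederatedStore), the replicas parameter and the replica-selection policy are illustrative choices, not the paper's implementation.

        # Minimal sketch (assumption, not the authors' code): metadata stays
        # in-house while content is replicated across federated sites.
        import hashlib
        import random

        class Site:
            """One organization's storage resource in the federation."""
            def __init__(self, name):
                self.name = name
                self.blobs = {}                    # blob_id -> bytes

            def put(self, blob_id, data):
                self.blobs[blob_id] = data

            def get(self, blob_id):
                return self.blobs.get(blob_id)

        class FederatedStore:
            """Virtualizes several sites as one store; the metadata catalog stays in-house."""
            def __init__(self, sites, replicas=2):
                self.sites = sites
                self.replicas = replicas
                self.metadata = {}                 # content name -> (blob_id, holder names)

            def publish(self, name, data):
                blob_id = hashlib.sha256(data).hexdigest()
                targets = random.sample(self.sites, min(self.replicas, len(self.sites)))
                for site in targets:
                    site.put(blob_id, data)
                self.metadata[name] = (blob_id, [s.name for s in targets])

            def fetch(self, name):
                blob_id, holders = self.metadata[name]
                for site in self.sites:
                    if site.name in holders:
                        data = site.get(blob_id)
                        if data is not None:       # tolerate a failed or empty replica
                            return data
                raise IOError("all replicas unavailable")

        if __name__ == "__main__":
            sites = [Site("org-a"), Site("org-b"), Site("org-c")]
            store = FederatedStore(sites, replicas=2)
            store.publish("report.pdf", b"example content")
            print(store.fetch("report.pdf"))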

    Improving performance and capacity utilization in cloud storage for content delivery and sharing services

    Content delivery and sharing (CDS) is a popular and cost-effective cloud-based service for organizations to deliver/share contents to/with end-users, partners and insider users. This type of service improves data availability and I/O performance by producing and distributing replicas of shared contents. However, such a technique increases storage and network resource utilization. This paper introduces a threefold methodology to improve the trade-off between I/O performance and capacity utilization of cloud storage for CDS services. This methodology includes: i) definition of a classification model for identifying types of users and contents by analyzing their consumption/demand and sharing patterns, ii) use of the classification model for defining content availability and load balancing schemes, and iii) integration of a dynamic availability scheme into a cloud-based CDS system. Our method was implemented. This work was partially supported by the Spanish Ministry of Economy, Industry and Competitiveness under the grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms".
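
    A minimal sketch of step i), assuming a simple threshold-based classifier: content items are labeled by observed demand and sharing frequency, and each class is mapped to a replica count that an availability/load-balancing scheme could use. The thresholds, class names and replica counts below are illustrative assumptions, not values from the paper.

        # Illustrative classifier: demand/sharing patterns -> content class -> replica count.
        def classify(downloads_per_day, shares_per_day):
            if downloads_per_day > 100 or shares_per_day > 20:
                return "hot"
            if downloads_per_day > 10:
                return "warm"
            return "cold"

        REPLICAS = {"hot": 3, "warm": 2, "cold": 1}   # assumed availability scheme

        catalog = {
            "dataset.tar": (250, 5),
            "slides.pdf": (30, 1),
            "archive.zip": (2, 0),
        }
        for name, (downloads, shares) in catalog.items():
            cls = classify(downloads, shares)
            print(name, cls, "->", REPLICAS[cls], "replica(s)")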

    CloudChain: A novel distribution model for digital products based on supply chain principles

    Cloud computing is a popular outsourcing solution for organizations to support information management throughout the life cycle of digital information goods. However, outsourcing management to a public provider results in a lack of control over digital products, which can produce incidents such as data unavailability during service outages, violations of confidentiality and/or legal issues. This paper presents CloudChain, a novel distribution model for digital products inspired by lean supply chain principles and designed to support information management throughout the digital product lifecycle. This model enables connected networks of customers, partners and organizations to conduct the stages of the digital product lifecycle as value chains. Virtual distribution channels are created over cloud resources so that organizations' applications can deliver digital products to partners' applications through a seamless information flow. A configurable packing and logistics service was developed to ensure confidentiality and privacy in product delivery by using encrypted packs. A chain management architecture enables organizations to keep tighter control over their value chains, distribution channels and digital products. CloudChain software instances were integrated into the information management system of a space agency. The CloudChain prototype was also evaluated in a private cloud, which revealed the feasibility of applying supply chain principles to the delivery of digital products in terms of efficiency, flexibility and security. This work was partially funded by the sectorial fund of research, technological development and innovation in space activities of the Mexican National Council of Science and Technology (CONACYT) and the Mexican Space Agency (AEM), project No. 262891.
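
    A minimal sketch of the encrypted-pack idea, assuming symmetric encryption with the third-party Python cryptography package (Fernet): the product bytes travel encrypted while routing metadata stays readable. The pack format, field names and key handling shown here are illustrative assumptions; the abstract does not describe CloudChain's actual packing format.

        # Toy encrypted "pack" for delivering a digital product over an untrusted channel.
        import json
        from cryptography.fernet import Fernet

        def make_pack(product_id, payload, key):
            """Wrap product bytes into a pack with clear-text routing metadata."""
            token = Fernet(key).encrypt(payload).decode()   # Fernet tokens are ASCII-safe
            return json.dumps({"product_id": product_id, "ciphertext": token})

        def open_pack(pack_json, key):
            pack = json.loads(pack_json)
            return pack["product_id"], Fernet(key).decrypt(pack["ciphertext"].encode())

        if __name__ == "__main__":
            key = Fernet.generate_key()                     # assumed shared out-of-band with the partner
            pack = make_pack("doc-42", b"digital product bytes", key)
            print(open_pack(pack, key))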

    SkyCDS: A resilient content delivery service based on diversified cloud storage

    Cloud-based storage is a popular outsourcing solution for organizations to deliver contents to end-users. However, there is a need for contingency plans to ensure service provision when the provider suffers outages or goes out of business. This paper presents SkyCDS: a resilient content delivery service based on a publish/subscribe overlay over diversified cloud storage. SkyCDS splits content delivery into metadata and content storage flow layers. The metadata flow layer is based on publish/subscribe patterns for insourcing metadata control back to the content owner. The storage layer is based on dispersing information over multiple cloud locations, with which organizations outsource content storage in a controlled manner. In SkyCDS, content dispersion is performed on the publisher side and content retrieval on the end-user side (the subscriber), which reduces the load on the organization side to metadata management only. SkyCDS also lowers the overhead of the content dispersion and retrieval processes by taking advantage of multi-core technology. A new allocation strategy based on cloud storage diversification, combined with failure-masking mechanisms, minimizes the side effects of temporary and permanent cloud service outages and of vendor lock-in. We developed a SkyCDS prototype that was evaluated by using synthetic workloads and a case study with real traces. Publish/subscribe queuing patterns were evaluated by using a simulation tool based on metrics characterized from the experimental evaluation. The evaluation revealed the feasibility of SkyCDS in terms of performance, reliability and storage space profitability. It also shows a novel way to compare storage/delivery options through risk assessment. The work presented in this paper has been partially supported by the EU under the COST programme Action IC1305, Network for Sustainable Ultrascale Computing (NESUS).
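
    To make the dispersal idea concrete, here is a toy sketch that splits content into k data chunks plus one XOR parity chunk, so the loss of any single cloud location can be masked. Production information-dispersal schemes use erasure codes rather than a single parity chunk; the function names, k=3 and the XOR scheme are illustrative assumptions, not SkyCDS's actual algorithm.

        # Toy dispersal: k data chunks + 1 XOR parity chunk tolerate one lost location.
        def disperse(data, k=3):
            """Split data into k equally sized chunks plus one XOR parity chunk."""
            chunk_len = -(-len(data) // k)                  # ceiling division
            chunks = [data[i * chunk_len:(i + 1) * chunk_len].ljust(chunk_len, b"\0")
                      for i in range(k)]
            parity = bytearray(chunk_len)
            for chunk in chunks:
                for i, byte in enumerate(chunk):
                    parity[i] ^= byte
            return chunks, bytes(parity), len(data)

        def recover(chunks, parity, original_len):
            """Rebuild one lost chunk (marked None) from the parity and surviving chunks."""
            rebuilt = bytearray(parity)
            for chunk in chunks:
                if chunk is None:
                    continue
                for i, byte in enumerate(chunk):
                    rebuilt[i] ^= byte
            restored = [bytes(rebuilt) if c is None else c for c in chunks]
            return b"".join(restored)[:original_len]

        if __name__ == "__main__":
            chunks, parity, size = disperse(b"content spread over diversified clouds", k=3)
            chunks[1] = None                                # simulate one cloud location failing
            print(recover(chunks, parity, size))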

    CloudBench: an integrated evaluation of VM placement algorithms in clouds

    A complex and important task in cloud resource management is the efficient allocation of virtual machines (VMs), or containers, to physical machines (PMs). Evaluating VM placement techniques in real-world clouds can be tedious, complex and time-consuming, which has motivated the increasing use of cloud simulators that facilitate this type of evaluation. However, most of the reported VM placement techniques based on simulations have been evaluated taking into account one specific cloud resource (e.g., CPU), whereas often unrealistic values are assumed for other resources (e.g., RAM, waiting times, application workloads, etc.). This situation generates uncertainty, discouraging their implementation in real-world clouds. This paper introduces CloudBench, a methodology to facilitate the evaluation and deployment of VM placement strategies in private clouds. CloudBench integrates a cloud simulator with a real-world private cloud. Two main tools were developed to support this methodology: a specialized multi-resource cloud simulator (CloudBalanSim), which is in charge of evaluating VM placement techniques, and a distributed resource manager (Balancer), which deploys and tests, in a real-world private cloud, the best VM placement configurations that satisfy the user requirements defined in the simulator. Both tools generate feedback from the evaluation scenarios and their results, which is used as a learning asset to carry out smarter and faster evaluations. The experiments conducted with the CloudBench methodology showed encouraging results for it as a new strategy to evaluate and deploy VM placement algorithms in the cloud. This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under the Grant TIN2016-79637-P "Towards Unification of HPC and Big Data Paradigms" and by the Mexican Council of Science and Technology (CONACYT) through a Ph.D. Grant (No. 212677).
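
    As a concrete example of the kind of policy such a methodology evaluates, the sketch below implements a simple multi-resource first-fit-decreasing heuristic that places VMs on PMs while respecting both CPU and RAM capacities. It is an illustrative baseline, not an algorithm from CloudBalanSim or Balancer; the data layout and the demand-ordering rule are assumptions.

        # Multi-resource first-fit-decreasing placement (illustrative baseline).
        def first_fit_decreasing(vms, pms):
            """vms: {name: (cpu, ram)}, pms: {name: (cpu, ram)} -> {vm: pm or None}."""
            free = {pm: list(capacity) for pm, capacity in pms.items()}
            placement = {}
            # Place the most demanding VMs first (sorted by summed demand, illustrative).
            for vm, (cpu, ram) in sorted(vms.items(), key=lambda kv: -(kv[1][0] + kv[1][1])):
                for pm, (free_cpu, free_ram) in free.items():
                    if cpu <= free_cpu and ram <= free_ram:
                        free[pm][0] -= cpu
                        free[pm][1] -= ram
                        placement[vm] = pm
                        break
                else:
                    placement[vm] = None          # no PM can host this VM
            return placement

        if __name__ == "__main__":
            vms = {"vm1": (4, 8), "vm2": (2, 4), "vm3": (8, 16)}
            pms = {"pm1": (8, 16), "pm2": (8, 16)}
            print(first_fit_decreasing(vms, pms))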

    Impact of external industrial sources on the regional and local SO2 and O3 levels of the Mexico megacity

    The air quality of megacities can be influenced by external emission sources on both global and regional scales. At the same time, their outflow emissions can exert an impact on the surrounding environment. The present study evaluates an SO2 peak observed on 24 March 2006 at the suburban supersite T1 and at ambient air quality monitoring stations located in the northern region of the Mexico City Metropolitan Area (MCMA) during the Megacity Initiative: Local and Global Research Observations (MILAGRO) field campaign. We found that this peak could be related to an important episodic emission event coming from the Tizayuca region, northeast of the MCMA. Back-trajectory analyses suggest that the emission event started in the early morning at 04:00 LST and lasted for about 9 h. The estimated emission rate is about 2 kg s⁻¹. To the best of our knowledge, sulfur dioxide emissions from the Tizayuca region have not been considered in previous studies. This finding suggests the possibility of "overlooked" emission sources in this region that could influence the air quality of the MCMA. This further motivated us to study the cement plants, including those in the state of Hidalgo and in the State of Mexico. It was found that they can contribute to the SO2 levels in the northeast (NE) region of the basin (about 42%) and at the suburban supersite T1 (41%), and that at some monitoring stations their contribution can be even higher than that of the Tula Industrial Complex (TIC). The contribution of the Tula Industrial Complex to regional ozone levels is also estimated. The model suggests a low contribution to the MCMA (1 to 4 ppb) and a slightly higher contribution at the suburban T1 (6 ppb) and rural T2 (5 ppb) supersites. However, the contribution could be as high as 10 ppb in the upper northwest region of the basin and in the southwest and south-southeast regions of the state of Hidalgo. In addition, the results indicated that the ozone plume could also be transported to northwest Tlaxcala, eastern Hidalgo, and farther northeast of the State of Mexico, but with rather low values. A first estimate of the potential contribution from flaring activities to regional ozone levels is presented. Results suggest that up to 30% of the total regional ozone from TIC could be related to flaring activities. Finally, the influence on SO2 levels of technological changes in the existing refinery, prompted by the upcoming construction of a new refinery in Tula, is briefly discussed. The combination of emission reductions in the power plant, the refinery and local sources in the MCMA could result in larger reductions of the average SO2 concentration. Reductions in external sources tend to affect the northern part of the basin more (−16 to −46%), while reductions of urban sources in the megacity tend to diminish SO2 levels substantially in the central, southwest, and southeast regions (−31 to −50%). United States Dept. of Energy (Atmospheric System Research Program, Contract DE-AC06-76RLO 1830); National Science Foundation (U.S.) (NSF award AGS-1135141); Consejo Nacional de Ciencia y Tecnología (Mexico).
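
    For scale, a back-of-the-envelope integration of the numbers quoted above (an illustrative calculation, not a figure reported by the study) gives the total mass released during the episode:

        E \approx 2\,\mathrm{kg\,s^{-1}} \times 9\,\mathrm{h} \times 3600\,\mathrm{s\,h^{-1}}
          \approx 6.5\times10^{4}\,\mathrm{kg} \approx 65\,\mathrm{t\ of\ SO_2}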

    Kulla, a container-centric construction model for building infrastructure-agnostic distributed and parallel applications

    This paper presents the design, development, and implementation of Kulla, a virtual container-centric construction model that mixes loosely coupled structures with a parallel programming model for building infrastructure-agnostic distributed and parallel applications. In Kulla, applications, dependencies and environment settings are mapped to construction units called Kulla-Blocks. A parallel programming model enables developers to couple those interoperable structures to create constructive structures named Kulla-Bricks. In these structures, continuous dataflows and parallel patterns can be created without modifying the code of the applications. Methods such as Divide&Containerize (data parallelism), Pipe&Blocks (streaming), and Manager/Block (task parallelism) were developed to create Kulla-Bricks. Recursive combinations of Kulla instances can be grouped into deployment structures called Kulla-Boxes, which are encapsulated into virtual containers (VCs) to create infrastructure-agnostic parallel and/or distributed applications. Deployment strategies were created for Kulla-Boxes to improve IT resource profitability. To show the feasibility and flexibility of this model, solutions combining real-world applications were implemented by using Kulla instances to compose parallel and/or distributed systems deployed on different IT infrastructures. An experimental evaluation based on use cases solving satellite and medical image processing problems revealed the efficiency of the Kulla model in comparison with traditional state-of-the-art solutions. This work has been partially supported by the EU project "ASPIDE: Exascale Programing Models for Extreme Data Processing" under grant 801091 and by the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the Madrid Regional Government.
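
    The Pipe&Blocks (streaming) pattern can be approximated, without Kulla itself, by chaining existing command-line tools through OS pipes so that stages are coupled without modifying their code. The sketch below assumes a POSIX environment with tr, sort and uniq available and is only meant for small inputs; it stands in for Kulla's containerized blocks, which the abstract does not detail.

        # Toy streaming pipeline: each "block" wraps an unmodified command-line tool.
        import subprocess

        def pipe_and_blocks(commands, source_bytes):
            """Run a list of commands as a streaming pipeline over a small byte stream."""
            procs = []
            for cmd in commands:
                procs.append(subprocess.Popen(
                    cmd,
                    stdin=(procs[-1].stdout if procs else subprocess.PIPE),
                    stdout=subprocess.PIPE,
                ))
            procs[0].stdin.write(source_bytes)      # feed the first block
            procs[0].stdin.close()                  # signal end of stream
            output = procs[-1].stdout.read()        # collect from the last block
            for proc in procs:
                proc.wait()
            return output

        if __name__ == "__main__":
            # Count distinct words flowing through the pipeline.
            result = pipe_and_blocks([["tr", " ", "\n"], ["sort"], ["uniq", "-c"]],
                                     b"kulla blocks kulla bricks\n")
            print(result.decode())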

    A multicenter, randomized study of argatroban versus heparin as adjunct to tissue plasminogen activator (TPA) in acute myocardial infarction: myocardial infarction with Novastan and TPA (MINT) study

    OBJECTIVES: This study examined the effect of a small-molecule, direct thrombin inhibitor, argatroban, on reperfusion induced by tissue plasminogen activator (TPA) in patients with acute myocardial infarction (AMI). BACKGROUND: Thrombin plays a crucial role in thrombosis and thrombolysis. In vitro and in vivo studies have shown that argatroban has advantages over heparin for the inhibition of clot-bound thrombin and for the enhancement of thrombolysis with TPA. METHODS: One hundred and twenty-five patients with AMI within 6 h were randomized to heparin, low-dose argatroban or high-dose argatroban in addition to TPA. The primary end point was the rate of thrombolysis in myocardial infarction (TIMI) grade 3 flow at 90 min. RESULTS: TIMI grade 3 flow was achieved in 42.1% of heparin, 56.8% of low-dose argatroban (p = 0.20 vs. heparin) and 58.7% of high-dose argatroban patients (p = 0.13 vs. heparin). In patients presenting after 3 h, TIMI grade 3 flow was significantly more frequent in high-dose argatroban versus heparin patients: 57.1% versus 20.0% (p = 0.03 vs. heparin). Major bleeding was observed in 10.0% of heparin, and in 2.6% and 4.3% of low-dose and high-dose argatroban patients, respectively. The composite of death, recurrent myocardial infarction, cardiogenic shock or congestive heart failure, revascularization and recurrent ischemia at 30 days occurred in 37.5% of heparin, 32.0% of low-dose argatroban and 25.5% of high-dose argatroban patients (p = 0.23). CONCLUSIONS: Argatroban, as compared with heparin, appears to enhance reperfusion with TPA in patients with AMI, particularly in those patients with delayed presentation. The incidences of major bleeding and adverse clinical outcome were lower in the patients receiving argatroban.

    Flow-Dependent Mass Transfer May Trigger Endothelial Signaling Cascades

    It is well known that fluid mechanical forces directly impact endothelial signaling pathways. While this general observation is clear, the underlying mechanisms that initiate these critical signaling processes are less apparent. This is because fluid mechanical forces can provide a direct mechanical input to possible mechanotransducers as well as alter critical mass transport characteristics (i.e., concentration gradients) of a host of chemical stimuli present in the blood stream. However, it has recently been accepted that mechanotransduction (direct mechanical force input), and not mass transfer, is the fundamental mechanism for many hemodynamic force-modulated endothelial signaling pathways and their downstream gene products. This conclusion has been based largely, and indirectly, on accepted criteria that correlate signaling behavior with shear rate and shear stress under changes in viscosity. In this work, we investigate the negative control for these criteria: we computationally and experimentally subject mass-transfer-limited systems, independent of mechanotransduction, to the purported criteria. The results show that the negative control (a mass-transfer-limited system) produces the same trends that have been used to identify mechanotransduction-dominant systems. Thus, the widely used viscosity-related shear stress and shear rate criteria are insufficient for identifying mechanotransduction-dominant systems, and research should continue to consider the importance of mass transfer in triggering signaling cascades.
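
    A standard transport-theory relation (the Lévêque approximation, quoted here as background rather than taken from the paper) makes the ambiguity explicit: with fluid viscosity μ, wall shear rate γ̇_w and species diffusivity D, the wall shear stress and the local mass transfer coefficient in a mass-transfer-limited boundary layer scale as

        \tau_w = \mu\,\dot{\gamma}_w, \qquad
        k_c(x) \;\propto\; \left(\frac{\dot{\gamma}_w\,D^{2}}{x}\right)^{1/3}

    so the concentration of an agonist delivered to the wall varies with shear rate (and, at fixed viscosity, with shear stress) even when no mechanosensor is involved, which is why such criteria alone cannot separate the two mechanisms.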